Comment Re:Not local inference (Score 1) 56

I was basing my comment about local model support on this:
https://github.com/moltbot/moltbot/issues/2838

Clawdbot currently supports model providers, but configuring local inference engines such as vLLM and Ollama is not straightforward or fully documented. Users running local LLMs (GPU / on-prem / WSL) face friction when attempting to integrate these providers reliably.

Adding official support for vLLM and Ollama as first-class providers would significantly improve local deployment, performance, and developer experience.

So it sounds like it's within the realm of possibility, but being neither documented nor straightforward, it's likely beyond the reach of most ordinary users.

Comment Not local inference (Score 3, Interesting) 56

Just to be clear here, Moltbot does not run AI inference locally. You connect it to your standard AI services (ChatGPT, Gemini, etc.), which do the actual AI processing. What Moltbot does is connect those services to other things, like WhatsApp.

In fact, even if you do have your own local inference engine running, such as a Llama model, Moltbot can't work with it currently. It ONLY works with the big AI services.

It really is just glue that connects things together, and it's so lightweight it even runs on a Raspberry Pi with 2 GB of RAM. So I'm not sure what all the Mac Mini hubbub is about. The fact that it runs on Amazon's Free Tier shows just how lightweight it is and how little processing it does (basically, it's just formatting and moving chat messages from one thing to another).

To the earlier commenters saying that Peter Steinberger is missing the entire point of running locally when he recommends AWS: you're misunderstanding what Moltbot does. If you're already committed to using online services for the fundamental AI inference itself, it doesn't matter that Moltbot is running in the cloud too.

Comment Splitting hairs (Score 1) 107

I've not bothered to read the actual lawsuit, but most likely WhatsApp really is fully end-to-end encrypted. So if the lawsuit is asserting what the summary says, it will fail.

However, if Meta really can read messages, then they must have copies of the clients' private keys stored on their own systems, which means they can decrypt any message they want (and a breach of Meta could expose all of those keys as well).

In other words, the communications are protected against any man-in-the-middle attacker, except Meta itself.
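The point can be illustrated with a toy sketch: whoever holds the private key can decrypt, no matter where the ciphertext travels. This is deliberately insecure textbook RSA with tiny primes, purely for illustration; WhatsApp actually uses the Signal protocol, which is far more involved.

```python
# Toy RSA (insecure, tiny primes) showing why key custody is the whole game:
# encryption in transit means nothing to a party that holds the private key.
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent (modular inverse; Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # anyone with the public key can encrypt
recovered = pow(ciphertext, d, n)  # anyone holding d can decrypt -- if a
                                   # provider stores d, "end-to-end" no
                                   # longer excludes that provider

assert recovered == message
```

If the private exponent `d` lives only on the user's device, the service operator is just another man in the middle who sees ciphertext; if the operator keeps a copy of `d`, the encryption excludes everyone except them.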

Comment Re:Good luck with that (Score 1) 123

Yeah, and I'd like to know how they're going to enforce this across state lines, never mind when someone orders a 3D printer from China and expects it to arrive with this restriction built in.

The thing with so many of these gun laws is that they go after the more "exotic" or esoteric things, when the vast majority of gun deaths are from run-of-the-mill pistols that are everywhere.

Comment Doesn't make sense (Score 4, Interesting) 32

This doesn't make sense. They just need to download the data from the torrents; you don't have to ask Anna's Archive for this. It might take a number of days, but surely they would want their own local copies rather than trying to access it "on demand". Also, Meta is already known to have downloaded this data a year ago.

Comment Very misleading headline (Score 5, Informative) 72

This is a very misleading headline based on what the article says. The summary makes it sound like the Pentagon had this device in Havana and their own device caused this syndrome. That's not the case at all.

The Pentagon purchased the device as part of their investigation, tested it, and determined it could be the cause of the injuries and symptoms. It contains Russian components, but that doesn't mean Russia deployed it or even built the complete device (although that is likely).
